RNN with a Recurrent Output Layer for Learning of Naturalness

Authors

  • Ján Dolinský
  • Hideyuki Takagi
Abstract

The behavior of recurrent neural networks with a recurrent output layer (ROL) is described mathematically, and it is shown that using an ROL is not only advantageous but in fact crucial to obtaining satisfactory performance in the proposed naturalness learning. Conventional belief holds that employing an ROL often substantially decreases a network's performance or renders it unstable, and ROLs are consequently rarely used. The objective of this paper is to demonstrate that there are cases where using an ROL is necessary. The concrete example presented models naturalness in handwritten letters.
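
The paper's exact equations are not reproduced here, but the architectural idea can be sketched: in a conventional recurrent network the output layer is a plain feedforward map of the hidden state, whereas a recurrent output layer additionally feeds the previous output back into the output computation. The NumPy sketch below illustrates this under assumed weight names (W_in, W_rec, W_out, W_out_rec) and tanh activations; it is an illustration, not the paper's concrete formulation.

```python
import numpy as np

def rol_rnn_forward(x_seq, W_in, W_rec, W_out, W_out_rec, b_h, b_y):
    """Forward pass of an RNN whose output layer is itself recurrent.

    x_seq: array of shape (T, n_in)
    Hidden state:  h_t = tanh(W_in x_t + W_rec h_{t-1} + b_h)
    Output state:  y_t = tanh(W_out h_t + W_out_rec y_{t-1} + b_y)
    """
    h = np.zeros(W_rec.shape[0])
    y = np.zeros(W_out_rec.shape[0])
    outputs = []
    for x_t in x_seq:
        h = np.tanh(W_in @ x_t + W_rec @ h + b_h)
        # The W_out_rec @ y term is the recurrent output layer; without it,
        # this reduces to a conventional Elman-style forward pass.
        y = np.tanh(W_out @ h + W_out_rec @ y + b_y)
        outputs.append(y)
    return np.stack(outputs)
```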


Similar articles

Multi-Task Learning for Prosodic Structure Generation Using BLSTM RNN with Structured Output Layer

Prosodic structure generation from text plays an important role in Chinese text-to-speech (TTS) synthesis and greatly influences the naturalness and intelligibility of the synthesized speech. This paper proposes a multi-task learning method for prosodic structure generation using a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN) and a structured output layer (SOL). Unl...
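
As a rough illustration of what a structured output layer on top of recurrent features can look like, the sketch below predicts two related label sequences jointly and feeds the lower-level prediction into the higher-level one. The BLSTM itself is omitted (per-frame hidden features H are assumed given), and the names and task wiring are illustrative assumptions rather than the cited paper's exact design.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def structured_output_layer(H, W1, b1, W2, W12, b2):
    """H: (T, n_h) per-frame hidden features (e.g. from a BLSTM, omitted here)."""
    p_task1 = softmax(H @ W1 + b1)                   # lower-level task, e.g. prosodic-word labels
    p_task2 = softmax(H @ W2 + p_task1 @ W12 + b2)   # higher-level task, conditioned on task 1
    return p_task1, p_task2
```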


A New Hybrid-parameter Recurrent Neural Networks for Online Handwritten Chinese Character Recognition

The recurrent neural network (RNN) is well suited to dealing with temporal sequences. In this paper, we present a deep RNN with new features and apply it to online handwritten Chinese character recognition. Compared with existing RNN models, three innovations are involved in the proposed system. First, a new hidden layer function for the RNN is proposed for learning temporal information bette...


A Neuro-fuzzy Model for Nonlinear

An improved parallel Recurrent Neural Network (RNN) model and an improved dynamic backpropagation (BP) method for its learning are proposed. The RNN model is given as a two-layer Jordan canonical architecture for both the continuous- and discrete-time cases. The output layer is of feedforward type. The hidden layer is recurrent, with self-feedbacks and full forward connections to the inputs. A...
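
A minimal step function for the architecture described above might look as follows, assuming per-neuron self-feedback in the hidden layer (i.e. diagonal recurrence), full forward connections from the inputs, and a feedforward linear output layer; the names and activation choices are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def hidden_self_feedback_step(x_t, h_prev, W_in, a_self, W_out, b_h, b_y):
    # a_self: vector of self-feedback gains, one per hidden unit (diagonal recurrence).
    h_t = np.tanh(W_in @ x_t + a_self * h_prev + b_h)
    y_t = W_out @ h_t + b_y          # purely feedforward output layer
    return h_t, y_t
```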


Learning Input and Recurrent Weight Matrices in Echo State Networks

The traditional echo state network (ESN) is a special type of temporally deep model, the recurrent neural network (RNN), in which the recurrent matrix is carefully designed and both the recurrent and input matrices are kept fixed. The ESN also adopts linear output (or readout) units to simplify the learning of the only trainable matrix, the output matrix. In this paper, we devise a special technique that takes ...
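
For context, a minimal echo state network can be sketched as below: the input and recurrent (reservoir) matrices are drawn once and kept fixed, and only the linear readout is trained, here by ridge regression. The spectral-radius scaling and regularisation value are common defaults assumed for illustration, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # Rescale so the reservoir has the assumed spectral radius (echo state property heuristic).
    W_res *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W_res)))
    return W_in, W_res

def run_reservoir(x_seq, W_in, W_res):
    h = np.zeros(W_res.shape[0])
    states = []
    for x_t in x_seq:
        h = np.tanh(W_in @ x_t + W_res @ h)   # fixed dynamics; nothing here is trained
        states.append(h.copy())
    return np.stack(states)

def train_readout(states, targets, ridge=1e-6):
    # The only learned parameters: a linear map from reservoir states to targets.
    S, Y = states, targets
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y).T
```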


Spiral Recurrent Neural Network for Online Learning

Autonomous, self-* sensor networks require sensor nodes with a certain degree of “intelligence”. An elementary component of such “intelligence” is the ability to learn to predict sensor values online. We consider recurrent neural network (RNN) models trained with an extended Kalman filter algorithm based on real-time recurrent learning (RTRL) with teacher forcing. We compared the performance ...
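
Teacher forcing, mentioned above, can be illustrated with a small sketch: during training, the value fed back through the output-feedback connection is the previous ground-truth target rather than the network's own previous prediction. The model structure and names below are illustrative assumptions; the cited work combines this with EKF/RTRL training, which is omitted here.

```python
import numpy as np

def forward_with_teacher_forcing(x_seq, y_targets, W_in, W_rec, W_fb, W_out, b_h):
    h = np.zeros(W_rec.shape[0])
    y_prev = np.zeros(W_out.shape[0])
    preds = []
    for t, x_t in enumerate(x_seq):
        h = np.tanh(W_in @ x_t + W_rec @ h + W_fb @ y_prev + b_h)
        y_t = W_out @ h
        preds.append(y_t)
        y_prev = y_targets[t]   # teacher forcing: feed back the ground-truth value
    return np.stack(preds)
```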



Journal:

Volume   Issue

Pages  -

Publication date: 2007